LiDAR mapping is important yet challenging in self-driving and mobile robotics. To tackle such a global point cloud registration problem, DeepMapping converts the complex map estimation into a self-supervised training of simple deep networks. Despite its broad convergence range on small datasets, DeepMapping still cannot produce satisfactory results on large-scale datasets with thousands of frames. This is due to the lack of loop closures and exact cross-frame point correspondences, and the slow convergence of its global localization network. We propose DeepMapping2 by adding two novel techniques to address these issues: (1) organization of training batch based on map topology from loop closing, and (2) self-supervised local-to-global point consistency loss leveraging pairwise registration. Our experiments and ablation studies on public datasets (KITTI, NCLT, and Nebula) demonstrate the effectiveness of our method. Our code will be released.
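The local-to-global point consistency idea can be sketched concretely: if a pairwise registration supplies the relative transform between two frames, then mapping a frame's points into the global map directly, or via its neighbor's global pose composed with that relative transform, should agree. A minimal numpy sketch of such a loss (the 4x4 homogeneous-matrix conventions and the function name are our assumptions, not the paper's implementation):

```python
import numpy as np

def consistency_loss(points_i, G_i, G_j, T_ij):
    """Mean discrepancy between two routes of mapping frame i's points
    into the global frame: directly via its global pose G_i, or via
    frame j's global pose G_j composed with the pairwise registration
    T_ij (frame i -> frame j). Transforms are 4x4 homogeneous
    matrices; points_i is (N, 3)."""
    homog = np.hstack([points_i, np.ones((len(points_i), 1))])  # (N, 4)
    direct = (G_i @ homog.T).T[:, :3]           # frame i -> global
    via_pair = (G_j @ T_ij @ homog.T).T[:, :3]  # frame i -> j -> global
    return np.mean(np.linalg.norm(direct - via_pair, axis=1))
```

When the global poses are consistent with the pairwise registration (G_i = G_j @ T_ij), the loss vanishes; gradients of such a term can then pull the estimated global poses toward agreement with the pairwise registrations.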
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
Recent advances in text-to-image generation have witnessed the rise of diffusion models as powerful generative models. Nevertheless, it is not trivial to exploit such latent variable models to capture the dependency among discrete words while pursuing complex visual-language alignment in image captioning. In this paper, we break the deeply rooted conventions of learning a Transformer-based encoder-decoder and propose a new diffusion-model-based paradigm tailored for image captioning, namely Semantic-Conditional Diffusion Networks (SCD-Net). Technically, for each input image, we first search for semantically relevant sentences via a cross-modal retrieval model to convey comprehensive semantic information. The rich semantics are further regarded as a semantic prior to trigger the learning of a Diffusion Transformer, which produces the output sentence in a diffusion process. In SCD-Net, multiple Diffusion Transformer structures are stacked to progressively strengthen the output sentence with better visual-language alignment and linguistic coherence in a cascaded manner. Furthermore, to stabilize the diffusion process, a new self-critical sequence training strategy is designed to guide the learning of SCD-Net with the knowledge of a standard autoregressive Transformer model. Extensive experiments on the COCO dataset demonstrate the promising potential of using diffusion models in the challenging image captioning task. Source code is available at \url{https://github.com/YehLi/xmodaler/tree/master/configs/image_caption/scdnet}.
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
Semi-supervised anomaly detection (AD) is a data mining task that aims to learn features from a partially labeled dataset to help detect outliers. In this paper, we classify existing semi-supervised AD methods into two categories, unsupervised-based and supervised-based, and point out that most of them suffer from insufficient exploitation of labeled data and insufficient exploration of unlabeled data. To tackle these problems, we propose Deep Anomaly Detection and Search (DADS), which applies reinforcement learning (RL) to balance exploitation and exploration. During training, the agent searches for possible anomalies through a hierarchically structured dataset and uses the found anomalies to enhance performance, which essentially draws on the idea of ensemble learning. Experimentally, we compare DADS with several state-of-the-art methods that leverage labeled known anomalies to detect both other known anomalies and unknown anomalies. Results show that DADS can efficiently and precisely search for anomalies from unlabeled data and learn from them, achieving good performance.
Visual place recognition (VPR) using deep networks has achieved state-of-the-art performance. However, most methods require training with ground-truth sensor poses to obtain the positive and negative samples of each observation's spatial neighborhood for supervised learning. When such information is unavailable, the temporal neighborhoods of sequentially collected data streams can be exploited for self-supervised training, although we find the resulting performance suboptimal. Inspired by noisy-label learning, we propose a novel self-supervised framework named \textit{TF-VPR} that uses temporal neighborhoods and learnable feature neighborhoods to discover the unknown spatial neighborhoods. Our method follows an iterative training paradigm that alternates between: (1) representation learning with data augmentation, (2) positive set expansion to include the current feature-space neighbors, and (3) positive set contraction via geometric verification. We conduct comprehensive experiments on both simulated and real datasets, with either RGB images or point clouds as input. The results show that our method outperforms the baselines in recall rate, robustness, and heading diversity, a novel metric we propose for VPR. Our code and datasets can be found at https://ai4ce.github.io/tf-vpr/.
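The expansion and contraction steps of the iterative paradigm can be sketched with plain numpy. This is a minimal illustration under our own assumptions: `verify` stands in for geometric verification, and the function name and data layout are ours, not TF-VPR's actual implementation:

```python
import numpy as np

def refine_positives(features, temporal_pos, k, verify):
    """One refinement round over the positive sets.
    features: (N, D) frame embeddings from the current representation.
    temporal_pos: dict mapping a frame index to its set of temporal
    neighbors (the initial, possibly noisy positives).
    verify(i, j) -> bool: stand-in for geometric verification."""
    refined = {}
    for i, pos in temporal_pos.items():
        # step (2), expansion: add the k nearest feature-space neighbors
        dists = np.linalg.norm(features - features[i], axis=1)
        knn = set(np.argsort(dists)[1:k + 1].tolist())  # skip self at rank 0
        candidates = pos | knn
        # step (3), contraction: keep only candidates passing verification
        refined[i] = {j for j in candidates if verify(i, j)}
    return refined
```

In the full framework this refinement would alternate with step (1), retraining the representation on the refined positive sets, so the feature-space neighbors gradually approach the true spatial neighbors.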
Diabetic retinopathy (DR) has become one of the leading causes of vision impairment in working-aged people and is a serious problem worldwide. However, most works ignore the ordinal information of the labels. In this project, we propose a novel design, MTCSNN, a Multi-Task Clinical Siamese Neural Network for the diabetic retinopathy severity prediction task. The novelty of this project is to utilize the ordinal information among labels and add a new regression task, which can help the model learn more discriminative feature embeddings for the fine-grained classification task. We conduct comprehensive experiments on RetinaMNIST, comparing MTCSNN with other models such as ResNet-18, 34, and 50. Our results show that MTCSNN outperforms them in terms of AUC and accuracy on the test dataset.
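The way an auxiliary regression task can inject ordinal information into a classifier can be sketched as a combined objective. This is a generic illustration under our own assumptions (the weighting `alpha` and the use of plain MSE for the ordinal head are ours, not necessarily MTCSNN's exact formulation):

```python
import numpy as np

def multitask_loss(logits, reg_pred, label, alpha=0.5):
    """Cross-entropy for the fine-grained class, plus a regression
    term that treats the severity grade as a real number, so
    predicting grade 3 for a grade-1 case is penalized more than
    predicting grade 2."""
    # classification head: softmax cross-entropy
    z = logits - logits.max()                 # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[label]
    # ordinal regression head: distance on the severity scale
    mse = (reg_pred - float(label)) ** 2
    return ce + alpha * mse
```

The regression term is what a pure classifier lacks: it makes the loss sensitive to *how far* a prediction is from the true grade, not just whether it is wrong.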
Document visual question answering (VQA) aims to understand visually rich documents in order to answer questions in natural language, an emerging research topic in natural language processing and computer vision. In this work, we introduce a new document VQA dataset named TAT-DQA, which consists of 3,067 document pages containing semi-structured tables and unstructured text, together with 16,558 question-answer pairs, built by extending the TAT-QA dataset. These documents are sampled from real-world financial reports and contain a large number of numbers, which means discrete reasoning capability is required to answer questions on this dataset. Based on TAT-DQA, we further develop a novel model named MHST that takes into account information in multiple modalities, including text, layout, and visual images, to intelligently address different types of questions with corresponding strategies, i.e., extraction or reasoning. Extensive experiments show that the MHST model significantly outperforms the baseline methods, demonstrating its effectiveness. However, the performance still lags far behind that of expert humans. We expect that our new TAT-DQA dataset will facilitate research on the deep understanding of visually rich documents combining vision and language, especially for scenarios that require discrete reasoning. We also hope the proposed model will inspire researchers to design more advanced document VQA models in the future.
Spoken language understanding (SLU) treats automatic speech recognition (ASR) and natural language understanding (NLU) as a unified task and usually suffers from data scarcity. We exploit an ASR and NLU joint training method based on meta-auxiliary learning to improve the performance of low-resource SLU tasks by leveraging only a large amount of speech data. An obvious advantage of this approach is that it provides a flexible framework to implement low-resource SLU training tasks without requiring access to any further semantic annotations. In particular, the NLU model is treated as a label generation network that predicts intent and slot labels from text. A multi-task network trains the ASR task and the SLU task synchronously from speech, and the predictions of the label generation network are delivered to the multi-task network as semantic targets. The efficiency of the proposed algorithm is demonstrated by experiments on the public CATSLU dataset, which yields ASR hypotheses more suitable for the downstream NLU task.
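The label-generation pipeline described above can be sketched as a simple data-flow function: the NLU model tags the ASR transcript, and its predictions become the semantic training targets for the joint ASR+SLU network. The function and field names here are ours, purely illustrative of the flow, not the paper's code:

```python
def make_training_example(speech, asr_transcribe, nlu_predict):
    """Build one self-labeled training example for the multi-task
    network from raw speech: the ASR transcript is the ASR target,
    and the NLU label-generation network's predictions on that
    transcript serve as pseudo semantic targets, so no human
    semantic annotation is needed."""
    text = asr_transcribe(speech)          # ASR target: the transcript
    intent, slots = nlu_predict(text)      # pseudo SLU targets
    return {"speech": speech,
            "asr_target": text,
            "slu_targets": {"intent": intent, "slots": slots}}
```

The key property is that only speech (and a pretrained NLU tagger for text) is consumed; the semantic supervision is generated, not annotated.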
In this paper, we propose an end-to-end SpA-Former to recover a shadow-free image from a single shadowed image. Unlike traditional methods that require two steps, shadow detection followed by shadow removal, SpA-Former unifies these steps into one: a one-stage network that directly learns the mapping function between shadowed and shadow-free images without requiring a separate shadow detection stage. SpA-Former is therefore adaptable to real-image shadow removal, handling shadows projected onto different semantic regions. SpA-Former consists of transformer layers together with a series of joint Fourier transform residual blocks and two-round joint spatial attention. The network is able to handle the task while achieving very fast processing efficiency. Our code is released at https://github.com/zhangbaijin/spatial-transformer-shadow-removal
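The core idea behind a Fourier transform residual block, as we read the abstract, is to filter the feature map in the frequency domain and add the input back as a residual. The following is a rough numpy sketch under our own assumptions; the real block is learned with convolutions on the spectrum rather than a fixed element-wise weight:

```python
import numpy as np

def fourier_residual_block(x, weight):
    """Toy Fourier residual block: transform a 2-D feature map to the
    frequency domain, apply an element-wise filter, transform back,
    and add the input as a residual connection. x and weight are
    real 2-D arrays of the same shape."""
    spectrum = np.fft.fft2(x)
    filtered = np.real(np.fft.ifft2(spectrum * weight))
    return x + filtered
```

Frequency-domain filtering gives every output position a global receptive field in one step, which is a common motivation for mixing Fourier blocks with spatial attention.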